Sparsely Connected Convolutional Networks

Authors

  • Ligeng Zhu
  • Ruizhi Deng
  • Zhiwei Deng
  • Greg Mori
  • Ping Tan
Abstract

Residual learning [6] with skip connections permits training ultra-deep neural networks and achieves superb performance. Building on this direction, DenseNets [7] proposed a dense connection structure in which each layer is directly connected to all of its predecessors. The densely connected structure leads to better information flow and feature reuse. However, the overly dense skip connections also bring a potential risk of overfitting, parameter redundancy, and large memory consumption. In this work, we analyze the feature aggregation patterns of ResNets and DenseNets under a uniform aggregation view. We show that both structures densely gather features from previous layers in the network but combine them in different ways: summation (ResNets) or concatenation (DenseNets). We compare the strengths and drawbacks of these two aggregation methods and analyze their potential effects on network performance. Based on our analysis, we propose a new structure, SparseNets, which achieves better performance with fewer parameters than both DenseNets and ResNets.
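
As a minimal illustration of the aggregation view described in the abstract (a PyTorch sketch, not the authors' code; the layer count and feature-map shapes are made up), the snippet below contrasts the two combination modes: summation requires all incoming feature maps to share a channel count, while concatenation grows the channel dimension with every aggregated layer.

    import torch

    def aggregate(features, mode):
        # features: outputs of all preceding layers, each of shape (N, C, H, W)
        if mode == "sum":        # ResNet-style: channel counts must match
            out = features[0]
            for f in features[1:]:
                out = out + f
            return out
        if mode == "concat":     # DenseNet-style: channels grow with depth
            return torch.cat(features, dim=1)
        raise ValueError(mode)

    # Three toy 8-channel feature maps standing in for earlier layers' outputs.
    feats = [torch.randn(1, 8, 32, 32) for _ in range(3)]
    print(aggregate(feats, "sum").shape)     # torch.Size([1, 8, 32, 32])
    print(aggregate(feats, "concat").shape)  # torch.Size([1, 24, 32, 32])

The linear channel growth under concatenation is the source of the parameter redundancy and memory consumption the abstract attributes to overly dense connections.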

Similar articles

Sparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks

Recently deep neural networks have received considerable attention due to their ability to extract and represent high-level abstractions in data sets. Deep neural networks such as fully-connected and convolutional neural networks have shown excellent performance on a wide range of recognition and classification tasks. However, their hardware implementations currently suffer from large silicon a...

Merging and Evolution: Improving Convolutional Neural Networks for Mobile Applications

Compact neural networks are inclined to exploit “sparsely-connected” convolutions such as depthwise convolution and group convolution for employment in mobile applications. Compared with standard “fully-connected” convolutions, these convolutions are more computationally economical. However, “sparsely-connected” convolutions block the inter-group information exchange, which induces severe perfo...
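
For context, here is a short PyTorch comparison of the parameter counts of a standard convolution and a group convolution (an illustrative setup with made-up channel counts, not taken from that paper); the groups argument is what makes the layer "sparsely connected", since each output channel sees only the input channels of its own group.

    import torch.nn as nn

    def n_params(m):
        return sum(p.numel() for p in m.parameters())

    # 64 -> 64 channels, 3x3 kernels; numbers chosen only for illustration.
    full  = nn.Conv2d(64, 64, kernel_size=3, padding=1)            # "fully-connected"
    group = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=8)  # "sparsely-connected"

    print(n_params(full))   # 36928 = 64*64*3*3 + 64
    print(n_params(group))  # 4672  = 64*(64//8)*3*3 + 64

The roughly 8x parameter saving comes at exactly the cost the snippet describes: within this layer, no information flows between the eight groups.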

CayleyNets: Graph Convolutional Neural Networks with Complex Rational Spectral Filters

The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on gr...
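
As background for spectral approaches like the one described, a NumPy sketch of generic spectral graph filtering (this is the standard spectral construction, not CayleyNets' rational filter; the toy graph and the filter g are made up):

    import numpy as np

    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)  # toy 3-node path graph
    D = np.diag(A.sum(1))
    L = D - A                                # combinatorial graph Laplacian

    w, U = np.linalg.eigh(L)                 # eigenvectors = graph Fourier basis
    x = np.array([1.0, 0.0, 0.0])            # a signal on the nodes

    g = np.exp(-w)                           # a learned filter would replace this
    y = U @ (g * (U.T @ x))                  # filter applied in the spectral domain
    print(y)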

Noise Injection into Inputs in Sparsely Connected Hopfield and Winner-Take-All Neural Networks (IEEE Transactions on Systems, Man, and Cybernetics, Part B)

In this paper, we show that noise injection into inputs in unsupervised learning neural networks does not improve their performance as it does in supervised learning neural networks. Specifically, we show that training noise degrades the classification ability of a sparsely connected version of the Hopfield neural network, whereas the performance of a sparsely connected winner-take-all neural n...

Journal: CoRR
Volume: abs/1801.05895
Issue: -
Pages: -
Publication date: 2018